

The Fight to Hold AI Companies Accountable for Children's Deaths

WIRED

After a series of suicides allegedly linked to AI chatbots, one lawyer is trying to hold companies like OpenAI accountable. Cedric Lacey relied on a camera to check on his kids while he was working as a commercial van driver, travelling to and from Alabama. Each morning, he would tune into the feed of his living room to make sure his teenage son, Amaurie, and his 14-year-old daughter were packing up their bags and getting ready to leave for school. But one morning last June, Lacey didn't see Amaurie up and about. Concerned, he called home, only to find out that his 17-year-old had hanged himself.


AI chatbots can effectively sway voters – in either direction

AIHub

The potential for artificial intelligence to affect election results is a major public concern. Two new papers - with experiments conducted in four countries - demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters' preferences by 10 percentage points or more in many cases. The LLMs' persuasiveness comes not from being masters of psychological manipulation, but because they come up with so many claims supporting their arguments for candidates' policy positions. "LLMs can really move people's attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side," said David Rand, a senior author on both papers. "But those claims aren't necessarily accurate - and even arguments built on accurate claims can still mislead by omission."


Race for AI is making Hindenburg-style disaster 'a real risk', says leading expert

The Guardian

The race to get artificial intelligence to market has raised the risk of a Hindenburg-style disaster that shatters global confidence in the technology, a leading researcher has warned. Michael Wooldridge, a professor of AI at Oxford University, said the danger arose from the immense commercial pressures that technology firms were under to release new AI tools, with companies desperate to win customers before the products' capabilities and potential flaws are fully understood. The surge in AI chatbots with guardrails that are easily bypassed showed how commercial incentives were prioritised over more cautious development and safety testing, he said. "It's the classic technology scenario," he said. "You've got a technology that's very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable."


No free pass for internet platforms on child safety, Starmer says

BBC News

No online platform will get a free pass on children's safety on the internet in new plans, Prime Minister Sir Keir Starmer has said. The government is pledging to close loopholes in existing laws designed to protect children online and will consult on a social media ban for under-16s as part of plans for online safety. There are also plans to introduce powers to speedily change the law in response to developing online behaviours, and to update legislation to preserve children's social media and online data - as campaigned for by the group Jools' Law. Opponents accused the government of inaction, and have called for Parliament to be given a vote on the social media ban for children. The government had already said it would launch the public consultation in March, seeking opinions about restricting children's access to AI chatbots and limiting infinite scrolling features for children - also known as doomscrolling.


Starmer to extend online safety rules to AI chatbots after Grok scandal

The Guardian

The government said it would close a legal loophole in the Online Safety Act. Starmer is to announce a "crackdown on vile illegal content created by AI" after a scandal involving Elon Musk's Grok tool. Makers of AI chatbots that put children at risk will face massive fines or even see their services blocked in the UK under law changes to be announced by Keir Starmer on Monday. Emboldened by Elon Musk's X stopping its Grok AI tool from creating sexualised images of real people in the UK after public outrage last month, ministers are planning a "crackdown on vile illegal content created by AI". With more and more children using chatbots for everything from help with their homework to mental health support, the government said it would "move fast to shut a legal loophole and force all AI chatbot providers to abide by illegal content duties in the Online Safety Act or face the consequences of breaking the law".


'I spoke to ChatGPT 8 times a day' - Gen Z's loneliness 'crisis'

BBC News

Working from home after years spent alone over Covid lockdowns, 23-year-old Paisley said he began to feel trapped, and felt only AI could help him. "I lost the ability to socialise," he said, and like many in Gen Z, he turned to AI for company. "At one point, I was talking to ChatGPT six, seven, eight times a day about my problems. I just couldn't get away from it. It was a dangerous slope." He shared his experience of loneliness with 22-year-old documentary maker Sam Tullen, who told the BBC what Paisley was going through was part of a wider Gen Z loneliness crisis. Gen Z, a term used for those born between 1997 and 2012, is often referred to as the first 'digital native' generation.


Meta Seeks to Bar Mentions of Mental Health, and Zuckerberg's Harvard Past, From Child Safety Trial

WIRED

The trial starts soon in New Mexico's case against Meta, and the company is pulling out all the stops to protect its reputation. As Meta heads to trial in the state of New Mexico for allegedly failing to protect minors from sexual exploitation, the company is making an aggressive push to have certain information excluded from the court proceedings. The company has petitioned the judge to exclude certain research studies and articles around social media and youth mental health; any mention of a recent high-profile case involving teen suicide and social media content; and any references to Meta's financial resources, the personal activities of employees, and Mark Zuckerberg's time as a student at Harvard University. Meta's requests to exclude information, known as motions in limine, are a standard part of pretrial proceedings, in which a party can ask a judge to determine in advance which evidence or arguments are permissible in court. This is to ensure the jury is presented with facts, not irrelevant or prejudicial information, and that the defendant is granted a fair trial.


Can AI chatbots trigger psychosis in vulnerable people?

FOX News



New Scientist changed the UK's freedom of information laws in 2025

New Scientist

By requesting copies of the then-UK technology secretary's ChatGPT logs, New Scientist set a precedent for how freedom of information laws apply to chatbot interactions, helping to hold governments to account. Our successful request for Peter Kyle's ChatGPT logs stunned observers. When I fired off an email at the start of 2025, I hadn't intended to set a legal precedent for how the UK government handles its interactions with AI chatbots, but that is exactly what happened. It all began in January when I read an interview with the then-UK tech secretary Peter Kyle. Keen to show that he used first-hand the technology his department was set up to regulate, Kyle said that he would often have conversations with ChatGPT. That got me wondering: could I obtain his chat history? Freedom of information (FOI) laws are often deployed to obtain emails and other documents produced by public bodies, but past precedent has suggested that some private data - such as search queries - aren't eligible for release in this way. I was interested to see which way chatbot conversations would be categorised.


Are these AI prompts damaging your thinking skills?

BBC News

What was the last thing you asked an AI chatbot to do for you? Maybe you asked it for an essay structure to help answer a tricky question, to provide an insightful analysis of a chunky data set, or to check if your cover letter matches the job description. Some experts worry that outsourcing these kinds of tasks means your brain is working less - and could even be harming your critical thinking and problem-solving skills. Earlier this year, the Massachusetts Institute of Technology (MIT) published a study showing that people who used ChatGPT to write essays showed less activity in brain networks associated with cognitive processing while undertaking the exercise.